
    A posteriori multi-stage optimal trading under transaction costs and a diversification constraint

    This paper presents a simple method for a posteriori (historical) multi-variate, multi-stage optimal trading under transaction costs and a diversification constraint. Starting from a given amount of money in some currency, we analyze the stage-wise optimal allocation over a time horizon, with potential investments in multiple currencies and various assets. Three variants are discussed: unconstrained trading frequency, a fixed total number of admissible trades, and a mandatory waiting period after every executed trade before the next one. The developed methods are based on efficient graph generation and subsequent graph search, and are evaluated quantitatively on real-world data. The fundamental motivation of this work is the preparatory labeling of financial time-series data for supervised machine learning.
    Comment: 25 pages, 4 figures, 6 tables
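    To illustrate the labeling use-case, the following is a minimal single-asset sketch, not the paper's multi-variate graph algorithm: a posteriori optimal trading under a proportional fee, solved by dynamic programming over a two-state (cash / invested) stage graph, whose recovered trade times can serve as buy/sell/hold labels for supervised learning. The function name, the fee model, and the two-state restriction are illustrative assumptions.

```python
def optimal_labels(prices, fee=0.001):
    """A posteriori optimal single-asset trade labels ('buy'/'sell'/'hold')
    maximizing final cash under a proportional transaction fee, computed
    by dynamic programming with full knowledge of the price path."""
    n = len(prices)
    cash = [0.0] * n    # best cash value when holding only cash at stage t
    asset = [0.0] * n   # best value (in asset units) when fully invested
    cash[0] = 1.0
    asset[0] = (1.0 - fee) / prices[0]  # buying immediately at stage 0
    choice = [[None, None] for _ in range(n)]
    choice[0] = ['hold', 'buy']
    for t in range(1, n):
        # cash state: keep cash, or sell the asset held since t-1
        sell = asset[t - 1] * prices[t] * (1.0 - fee)
        cash[t], choice[t][0] = max((cash[t - 1], 'hold'), (sell, 'sell'))
        # invested state: keep the asset, or buy with the cash from t-1
        buy = cash[t - 1] * (1.0 - fee) / prices[t]
        asset[t], choice[t][1] = max((asset[t - 1], 'hold'), (buy, 'buy'))
    # backtrack from the all-cash terminal state
    labels = [''] * n
    s = 0
    for t in range(n - 1, 0, -1):
        labels[t] = choice[t][s]
        if labels[t] == 'sell':
            s = 1   # was invested before selling
        elif labels[t] == 'buy':
            s = 0   # was in cash before buying
    labels[0] = 'buy' if s == 1 else 'hold'
    return labels
```

    With zero fees, the labels simply buy every local minimum and sell every local maximum; a positive fee suppresses trades whose gross gain does not cover the round-trip cost.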

    A Convex Feasibility Approach to Anytime Model Predictive Control

    This paper proposes to decouple performance optimization from the enforcement of asymptotic convergence in Model Predictive Control (MPC), so that convergence to a given terminal set is achieved independently of how much performance is optimized at each sampling step. An explicit decrease condition is embedded in the MPC constraints and handled by a novel, very easy-to-implement convex feasibility solver proposed in the paper. This makes it possible to run an outer performance-optimization algorithm on top of the feasibility solver for an amount of time that depends on the CPU resources available within the current sampling step (possibly going open-loop at a given sampling step in the extreme case where no resources are available), while still guaranteeing convergence to the terminal set. While the MPC setup and the solver can deal with quite general classes of functions, we highlight the synthesis method and show numerical results for linear MPC with ellipsoidal and polyhedral terminal sets.
    Comment: 8 pages
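    The anytime scheme rests on a convex feasibility solver. As a generic stand-in, not the solver proposed in the paper, here is a minimal sketch of one classical such method, alternating projections onto two convex sets (a Euclidean ball and a halfspace), which converges to a point in their intersection whenever it is nonempty.

```python
import numpy as np

def project_ball(x, center, radius):
    # Euclidean projection onto {x : ||x - center|| <= radius}
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist

def project_halfspace(x, a, b):
    # Euclidean projection onto {x : a @ x <= b}
    v = a @ x - b
    return x if v <= 0.0 else x - v * a / (a @ a)

def feasibility(x0, a, b, center, radius, iters=200):
    # alternating projections between the two sets
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_halfspace(project_ball(x, center, radius), a, b)
    return x
```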

    Direct data-driven control of constrained linear parameter-varying systems: A hierarchical approach

    In many nonlinear control problems, the plant can be accurately described by a linear model whose operating point depends on some measurable variables, called scheduling signals. When such a linear parameter-varying (LPV) model of the open-loop plant needs to be derived from a set of data, several issues arise in terms of parameterization, estimation, and validation of the model before designing the controller. Moreover, the way modeling errors affect the closed-loop performance is still largely unknown in the LPV context. In this paper, a direct data-driven control method is proposed to design LPV controllers directly from data, without deriving a model of the plant. The main idea of the approach is to use a hierarchical control architecture, where the inner controller is designed to match a simple, a-priori specified closed-loop behavior. An outer model predictive controller is then synthesized to handle input/output constraints and to enhance the performance of the inner loop. The effectiveness of the approach is illustrated by means of a simulation and an experimental example. Practical implementation issues are also discussed.
    Comment: Preliminary version of the paper "Direct data-driven control of constrained systems" published in the IEEE Transactions on Control Systems Technology
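    A toy sketch of the hierarchical idea, under the illustrative assumption that the inner loop has been tuned to realize the simple first-order reference model y_next = a*y + (1 - a)*r exactly: the outer controller picks the reference r by a one-step constrained optimization that here reduces to clipping. All names and the one-step horizon are drastic simplifications of the paper's MPC outer loop.

```python
def outer_reference(y, setpoint, a=0.8, r_min=-1.0, r_max=1.0):
    # one-step minimization of (y_next - setpoint)^2 subject to
    # r_min <= r <= r_max, where y_next = a*y + (1 - a)*r;
    # the unconstrained minimizer is simply clipped to the bounds
    r = (setpoint - a * y) / (1.0 - a)
    return min(max(r, r_min), r_max)

def simulate(setpoint, steps=30, a=0.8):
    # closed loop, with the inner controller assumed to realize
    # y_next = a*y + (1 - a)*r exactly
    y = 0.0
    for _ in range(steps):
        y = a * y + (1.0 - a) * outer_reference(y, setpoint, a)
    return y
```

    Small setpoints are tracked exactly; for large setpoints the reference saturates at the bound and the output settles at the constrained optimum instead.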

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line-search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, the methods are computationally attractive, since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
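    The FBE is constructed from the forward-backward (proximal-gradient) step. As background, a minimal forward-backward iteration for the ℓ1-regularized least-squares problem min 0.5*||Ax - b||^2 + lam*||x||_1 looks as follows; this is the generic first-order method, not the paper's Newton-CG algorithms.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, gamma, iters=500):
    # forward-backward splitting (ISTA) for
    #   minimize 0.5*||A x - b||^2 + lam*||x||_1
    # requires 0 < gamma < 2 / ||A^T A|| for convergence
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                            # forward step
        x = soft_threshold(x - gamma * grad, gamma * lam)   # backward step
    return x
```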

    E.P. v. Alaska Psychiatric Institute: The Evolution of Involuntary Civil Commitments from Treatment to Punishment

    This paper addresses the problem of identification of hybrid dynamical systems, by focusing attention on hinging hyperplanes and Wiener piecewise affine autoregressive exogenous models. In particular, we provide algorithms based on mixed-integer linear or quadratic programming which are guaranteed to converge to a global optimum.

    Douglas-Rachford Splitting: Complexity Estimates and Accelerated Variants

    We propose a new approach for analyzing convergence of the Douglas-Rachford splitting method for solving convex composite optimization problems. The approach is based on a continuously differentiable function, the Douglas-Rachford Envelope (DRE), whose stationary points correspond to the solutions of the original (possibly nonsmooth) problem. By proving the equivalence between the Douglas-Rachford splitting method and a scaled gradient method applied to the DRE, results from smooth unconstrained optimization are employed to analyze convergence properties of DRS, to tune the method, and to derive an accelerated version of it.
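    As background, the basic Douglas-Rachford splitting iteration that the DRE analysis builds on can be sketched as follows for min f(x) + g(x), here instantiated with f quadratic and g the ℓ1-norm. This is the plain method, a generic illustration rather than the accelerated variant derived in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(prox_f, prox_g, z0, iters=500):
    # basic DRS iteration: z+ = z + prox_g(2*prox_f(z) - z) - prox_f(z);
    # at a fixed point z*, prox_f(z*) minimizes f + g
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        x = prox_f(z)
        y = prox_g(2.0 * x - z)
        z = z + y - x
    return prox_f(z)

# instantiate for min 0.5*||x - b||^2 + lam*||x||_1
b = np.array([3.0, 0.1])
gamma, lam = 1.0, 1.0
prox_f = lambda z: (z + gamma * b) / (1.0 + gamma)  # prox of gamma*f
prox_g = lambda z: soft_threshold(z, gamma * lam)   # prox of gamma*g
x_star = douglas_rachford(prox_f, prox_g, np.zeros(2))
```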

    Recurrent Neural Network Training with Convex Loss and Regularization Functions by Extended Kalman Filtering

    This paper investigates the use of extended Kalman filtering to train recurrent neural networks with rather general convex loss functions and regularization terms on the network parameters, including ℓ1-regularization. We show that the learning method is competitive with stochastic gradient descent in a nonlinear system identification benchmark and in training a linear system with binary outputs. We also explore the use of the algorithm in data-driven nonlinear model predictive control and its relation with disturbance models for offset-free closed-loop tracking.
    Comment: 21 pages, 3 figures, submitted for publication
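    To illustrate the mechanics, here is a minimal sketch of Kalman-filter-based parameter training for a model that is linear in its parameters, the degenerate case where the EKF measurement Jacobian is exact; the paper's setting with recurrent networks, general convex losses, and ℓ1 terms is substantially more general. All names are illustrative.

```python
import numpy as np

def ekf_train(xs, ys, q=1e-8, r=0.01):
    # treat the parameter vector theta as the filter state with
    # random-walk dynamics; measurement model yhat = theta @ x,
    # so the "extended" Jacobian H is simply x
    n = xs.shape[1]
    theta = np.zeros(n)
    P = np.eye(n)                       # parameter covariance
    for x, y in zip(xs, ys):
        P = P + q * np.eye(n)           # random-walk process noise
        H = x                           # measurement Jacobian
        S = H @ P @ H + r               # innovation variance
        K = P @ H / S                   # Kalman gain
        theta = theta + K * (y - H @ theta)
        P = P - np.outer(K, H @ P)      # covariance update
    return theta
```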

    Active Learning for Regression by Inverse Distance Weighting

    This paper proposes an active learning (AL) algorithm to solve regression problems, based on inverse-distance weighting functions for selecting the feature vectors to query. The algorithm has the following features: (i) it supports both pool-based and population-based sampling; (ii) it is not tailored to a particular class of predictors; (iii) it can handle known and unknown constraints on the queryable feature vectors; and (iv) it can run either sequentially or in batch mode, depending on how often the predictor is retrained. The potential of the method is shown in numerical tests on illustrative synthetic problems and on real-world datasets from the UCI repository. A Python implementation of the algorithm, which we call IDEAL (Inverse-Distance based Exploration for Active Learning), is available at \url{http://cse.lab.imtlucca.it/~bemporad/ideal}.
    Comment: 21 pages, 9 figures. Submitted for publication
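    For flavor, a sketch of an inverse-distance-weighting exploration score of the kind used for query selection: candidates far from all previously queried samples score high, while already-queried points score zero. This is an illustrative variant, not necessarily the exact acquisition function implemented in IDEAL.

```python
import numpy as np

def idw_score(x, X):
    # exploration score based on inverse squared distances to the
    # already-queried samples X; bounded in [0, 1) via arctan
    d2 = np.sum((X - x) ** 2, axis=1)
    if np.any(d2 == 0.0):
        return 0.0  # x was already queried
    return (2.0 / np.pi) * np.arctan(1.0 / np.sum(1.0 / d2))

def next_query(pool, X):
    # pool-based sampling: pick the candidate with the highest score
    return max(pool, key=lambda x: idw_score(np.asarray(x), X))
```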